Deterministic nonmonotone strategies for effective training of multilayer perceptrons

Abstract



We present deterministic nonmonotone learning strategies for multilayer perceptrons (MLPs), i.e., deterministic training algorithms in which error function values are allowed to increase at some epochs. To this end, we argue that the current error function value must satisfy a nonmonotone criterion with respect to the maximum error function value of the M previous epochs, and we propose a subpr...
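The nonmonotone test described above, in which a step is accepted as long as the new error does not exceed the maximum error of the last M epochs, can be sketched in a few lines. The following Python fragment is only an illustration under assumed names (nonmonotone_train, error_fn, grad_fn) and a simple halving backtracking rule; the paper's actual strategy, including the subprocedure that adapts M, is more elaborate.

```python
from collections import deque

import numpy as np


def nonmonotone_train(w, error_fn, grad_fn, lr=0.1, M=10, max_epochs=1000):
    """Gradient descent with a nonmonotone acceptance test (illustrative sketch)."""
    recent_errors = deque([error_fn(w)], maxlen=M)
    for _ in range(max_epochs):
        g = grad_fn(w)
        step = lr
        while True:
            w_new = w - step * g
            e_new = error_fn(w_new)
            # Nonmonotone criterion: compare against the *maximum* error of the
            # last M epochs, so the error is allowed to increase at some epochs.
            if e_new <= max(recent_errors):
                break
            step *= 0.5              # backtrack (an assumed, simple rule)
            if step < 1e-12:         # give up on this epoch
                w_new, e_new = w, error_fn(w)
                break
        w = w_new
        recent_errors.append(e_new)
    return w


# Tiny usage example on a quadratic "error" surface.
if __name__ == "__main__":
    A = np.diag([1.0, 10.0])
    print(nonmonotone_train(np.array([3.0, -2.0]),
                            error_fn=lambda w: 0.5 * w @ A @ w,
                            grad_fn=lambda w: A @ w))
```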

Related articles

Fast training of multilayer perceptrons

Training a multilayer perceptron by an error backpropagation algorithm is slow and uncertain. This paper describes a new approach which is much faster and more certain than error backpropagation. The proposed approach is based on combined iterative and direct solution methods. In this approach, we use an inverse transformation for linearization of nonlinear output activation functions, direct soluti...
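One ingredient mentioned here, linearizing the nonlinear output activation by applying its inverse to the targets so that the output-layer weights follow from a direct linear solve, can be sketched as follows. The snippet assumes sigmoid output units and hypothetical names (solve_output_layer, H for hidden activations, T for targets); it illustrates only that idea, not the paper's complete combined iterative/direct scheme.

```python
import numpy as np


def solve_output_layer(H, T, eps=1e-6):
    """Directly solve for the output weights of sigmoid output units.

    H : (n_samples, n_hidden) hidden activations (with a bias column).
    T : (n_samples, n_outputs) targets in (0, 1).
    """
    T = np.clip(T, eps, 1.0 - eps)      # keep the inverse transform finite
    Z = np.log(T / (1.0 - T))           # logit = inverse of the sigmoid
    W, *_ = np.linalg.lstsq(H, Z, rcond=None)
    return W                            # sigmoid(H @ W) approximates T


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    H = np.hstack([rng.standard_normal((100, 5)), np.ones((100, 1))])
    T = 1.0 / (1.0 + np.exp(-H @ rng.standard_normal((6, 1))))
    W = solve_output_layer(H, T)
    print(np.abs(1.0 / (1.0 + np.exp(-H @ W)) - T).max())  # close to zero
```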


Co-evolving Multilayer Perceptrons Along Training Sets

When designing an artificial neural network (ANN), it is important to optimise the network architecture and the learning coefficients of the training algorithm, as well as the time the network training phase takes, since this is the most time-consuming phase. In this paper an approach to cooperative co-evolutionary optimisation of multilayer perceptrons (MLPs) is presented. The cooperative co-evoluti...
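As a rough illustration of cooperative co-evolution, the sketch below evolves one population of architectures (hidden-layer sizes) and one of learning coefficients (initial learning rates), scoring each individual together with a partner from the other population. Population sizes, operators, and the use of scikit-learn's MLPClassifier are assumptions made for brevity; this is not the algorithm of the paper.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier

X, y = make_classification(n_samples=400, n_features=10, random_state=0)
X_tr, X_va, y_tr, y_va = train_test_split(X, y, random_state=0)
rng = np.random.default_rng(0)

hidden_pop = rng.integers(2, 32, size=6)        # population of architectures
lr_pop = 10.0 ** rng.uniform(-3, -1, size=6)    # population of learning rates

def fitness(h, lr):
    """Validation accuracy of a briefly trained MLP with the given pair."""
    net = MLPClassifier(hidden_layer_sizes=(int(h),), learning_rate_init=float(lr),
                        max_iter=200, random_state=0)
    net.fit(X_tr, y_tr)
    return net.score(X_va, y_va)

for generation in range(4):
    # Cooperative evaluation: pair each learning rate with a fixed architecture,
    # then pair each architecture with the best learning rate found.
    best_lr = lr_pop[np.argmax([fitness(hidden_pop[0], lr) for lr in lr_pop])]
    best_h = hidden_pop[np.argmax([fitness(h, best_lr) for h in hidden_pop])]
    # Mutation-based reproduction around the current best pair.
    hidden_pop = np.clip(best_h + rng.integers(-4, 5, size=6), 2, 64)
    lr_pop = np.clip(best_lr * 10.0 ** rng.uniform(-0.3, 0.3, size=6), 1e-4, 1e-1)

print("selected hidden units:", int(best_h), "learning rate:", float(best_lr))
```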


Dynamic tunneling technique for efficient training of multilayer perceptrons

A new efficient computational technique for training of multilayer feedforward neural networks is proposed. The proposed algorithm consists of two learning phases. The first phase is a local search which implements gradient descent, and the second phase is a direct search scheme which implements dynamic tunneling in weight space, avoiding local traps and thereby generating the point of next descent. ...
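The two-phase structure (gradient-descent local search followed by a search for a new point of descent) can be sketched as below. The escape phase here uses a simple random perturbation search in weight space as a stand-in for the dynamic-tunneling differential equation, and all names (descend_then_tunnel, error_fn, grad_fn) are illustrative assumptions.

```python
import numpy as np


def descend_then_tunnel(w, error_fn, grad_fn, lr=0.05, n_cycles=20,
                        tunnel_tries=200, tunnel_scale=2.0, rng=None):
    """Two-phase minimisation sketch: gradient descent, then an escape search."""
    rng = rng or np.random.default_rng(0)
    best_w, best_e = w.copy(), error_fn(w)
    for _ in range(n_cycles):
        # Phase 1: local search by plain gradient descent.
        for _ in range(500):
            g = grad_fn(w)
            if np.linalg.norm(g) < 1e-6:
                break
            w = w - lr * g
        if error_fn(w) < best_e:
            best_w, best_e = w.copy(), error_fn(w)
        # Phase 2: look for a point no worse than the best minimum found,
        # from which descent can continue (stand-in for dynamic tunneling).
        for _ in range(tunnel_tries):
            cand = best_w + tunnel_scale * rng.standard_normal(best_w.shape)
            if error_fn(cand) <= best_e:
                w = cand                 # new point of next descent
                break
        else:
            break                        # no escape found; stop
    return best_w, best_e


if __name__ == "__main__":
    f = lambda w: np.sum(w**4 - 3 * w**2 + w)     # multimodal test surface
    g = lambda w: 4 * w**3 - 6 * w + 1
    print(descend_then_tunnel(np.array([1.2, 1.2]), f, g))
```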


New Training Algorithms for Dependently Initialized Multilayer Perceptrons

Due to the chaotic nature of multilayer perceptron training, training error usually fails to be a monotonically nonincreasing function of the number of hidden units. New training algorithms are developed where weights and thresholds from a well-trained smaller network are used to initialize a larger network. Methods are also developed to reduce the total amount of training required. It is shown...
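The dependent-initialization idea, reusing the weights of a well-trained smaller network as the starting point of a larger one, can be illustrated with a simple copy-and-pad scheme for a single hidden layer. The function name and the small-random padding are assumptions for illustration; the paper's algorithms go further than this.

```python
import numpy as np


def grow_hidden_layer(W1_small, W2_small, n_new_hidden, scale=0.01, rng=None):
    """Initialise a wider one-hidden-layer MLP from a trained smaller one.

    W1_small : (n_inputs, n_hidden_small)  input-to-hidden weights.
    W2_small : (n_hidden_small, n_outputs) hidden-to-output weights.
    """
    rng = rng or np.random.default_rng(0)
    n_in, n_small = W1_small.shape
    n_out = W2_small.shape[1]

    # Extra hidden units start with small random weights; trained weights
    # are copied in place, so the larger network starts from the smaller
    # network's solution instead of from scratch.
    W1_large = scale * rng.standard_normal((n_in, n_new_hidden))
    W2_large = scale * rng.standard_normal((n_new_hidden, n_out))
    W1_large[:, :n_small] = W1_small
    W2_large[:n_small, :] = W2_small
    return W1_large, W2_large


if __name__ == "__main__":
    W1, W2 = np.ones((4, 3)), np.ones((3, 2))
    W1_big, W2_big = grow_hidden_layer(W1, W2, n_new_hidden=5)
    print(W1_big.shape, W2_big.shape)   # (4, 5) (5, 2)
```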


Journal

Journal title: IEEE Transactions on Neural Networks

Year: 2002

ISSN: 1045-9227

DOI: 10.1109/tnn.2002.804225